1.
Comput Biol Med; 152: 106337, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36502695

ABSTRACT

Immunotherapy targeting immune checkpoint proteins, such as programmed cell death ligand 1 (PD-L1), has shown impressive outcomes in many clinical trials, but only 20%-40% of patients benefit from it. The Food and Drug Administration has approved the use of the Combined Positive Score (CPS), which evaluates PD-L1 expression in tumour biopsies, to identify patients with the highest likelihood of responding to anti-PD-1/PD-L1 therapy in several solid tumour types. The current CPS workflow requires a pathologist to manually score two-colour PD-L1 chromogenic immunohistochemistry images. Multiplex immunofluorescence (mIF) imaging reveals the expression of a larger number of immune markers in tumour biopsies and has been used extensively in immunotherapy research. Recent rapid progress in Artificial Intelligence (AI)-based image analysis, particularly deep learning, provides cost-effective and high-quality solutions for healthcare. In this article, we propose an imaging pipeline that takes three-colour mIF images (DAPI, PD-L1, and Pan-cytokeratin) as input and predicts the CPS using AI techniques. Our pipeline is composed of three modules employing image processing, machine learning, and deep learning techniques. The first module, quality check (QC), detects and removes image regions contaminated with sectioning and staining artefacts, ensuring that only regions free of the three common artefacts are used for downstream analysis. The second module, nuclear segmentation, uses deep learning to segment and count nuclei in the DAPI images; our specialized method can accurately separate touching nuclei. The third module, cell phenotyping, calculates the CPS by identifying and counting PD-L1-positive cells and tumour cells. These modules are data-efficient and require only a few manual annotations for training. Using tumour biopsies from a clinical trial, we found that the CPS from the AI-based models shows a high Spearman correlation (78%, p = 0.003) with the pathologist-scored CPS.
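At its core, the third module reduces to a simple count-based score. The sketch below is a minimal illustration, not the authors' code: the function name and the per-cell boolean inputs are hypothetical, and it assumes the commonly used definition of CPS as PD-L1-positive cells per 100 viable tumour cells, capped at 100.

```python
import numpy as np

def combined_positive_score(pdl1_positive, is_tumour_cell):
    """Compute a CPS-style score from per-cell boolean arrays produced by a
    phenotyping step (hypothetical inputs, for illustration only).

    pdl1_positive  : True for any cell called PD-L1-positive
    is_tumour_cell : True for viable tumour cells
    """
    pdl1_positive = np.asarray(pdl1_positive, dtype=bool)
    is_tumour_cell = np.asarray(is_tumour_cell, dtype=bool)
    n_tumour = is_tumour_cell.sum()
    if n_tumour == 0:
        return float("nan")            # undefined without tumour cells
    # PD-L1-positive cells per 100 viable tumour cells, capped at 100 (assumed convention)
    return min(100.0, 100.0 * pdl1_positive.sum() / n_tumour)

# toy usage: 120 cells, 30 PD-L1-positive, 80 tumour cells -> 37.5
print(combined_positive_score(np.arange(120) < 30, np.arange(120) < 80))
```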


Subjects
Artificial Intelligence, Neoplasms, Humans, B7-H1 Antigen/metabolism, Neoplasms/diagnostic imaging, Immunohistochemistry, Immunofluorescence, Tumor Biomarkers/metabolism
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 1711-1714, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891616

ABSTRACT

Molecular profiling of the tumor, in addition to histological tumor analysis, can provide robust information for targeted cancer therapies. Often such data are not available for analysis due to processing delays, cost, or inaccessibility. In this paper, we propose a deep learning-based method to predict RNA-sequencing expression (RNA-seq) from Hematoxylin and Eosin whole-slide images (H&E WSIs) of head and neck cancer patients. Conventional methods utilize a patch-by-patch prediction and aggregation strategy to predict RNA-seq at the whole-slide level. However, these methods lose the spatial-contextual relationships between patches, which encode morphological interactions crucial for predicting RNA-seq. We propose a novel framework that employs a neural image compressor to preserve the spatial relationships between patches and generate a compressed representation of the whole-slide image, and a customized deep-learning regressor to predict RNA-seq from the compressed representation by learning both global and local features. We tested the proposed method on the publicly available TCGA-HNSC dataset, comprising 43 test patients, for 10 oncogenes. Our experiments showed that the proposed method achieves a 4.12% higher mean correlation and predicts 6 out of 10 genes with better correlation than a state-of-the-art baseline method. Furthermore, we provide interpretability using pathway analysis of the best-predicted genes and activation maps that highlight the regions in an H&E image most salient for the RNA-seq prediction. Clinical relevance: The proposed method has the potential to discover genetic biomarkers directly from histopathology images, which could be used to pre-screen patients before actual genetic testing, thereby saving cost and time.
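To make the compressed-representation idea concrete, the following sketch shows one plausible way to regress gene expression from a grid of patch embeddings; the layer sizes, the `GridRegressor` name, and the shape of the compressed slide are illustrative assumptions, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class GridRegressor(nn.Module):
    """Toy regressor over a compressed WSI: each tissue patch has been encoded
    into a D-dim embedding and placed back at its (row, col) grid position,
    so spatial relationships between patches are preserved."""
    def __init__(self, embed_dim=128, n_genes=10):
        super().__init__()
        self.conv = nn.Sequential(                     # local (neighbouring-patch) context
            nn.Conv2d(embed_dim, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(                     # global context via pooling
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_genes)
        )

    def forward(self, x):          # x: (batch, embed_dim, H_patches, W_patches)
        return self.head(self.conv(x))

# usage: a slide compressed to a 32x32 grid of 128-dim patch embeddings
model = GridRegressor()
compressed_slide = torch.randn(1, 128, 32, 32)
pred_expression = model(compressed_slide)   # (1, 10) predicted expression for 10 genes
```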


Subjects
Head and Neck Neoplasms, Eosine Yellowish-(YS), Head and Neck Neoplasms/genetics, Hematoxylin, Humans, RNA/genetics
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 3205-3208, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891923

ABSTRACT

Nuclei segmentation in whole slide images (WSIs) stained with Hematoxylin and Eosin (H&E) dye is a key step in computational pathology that aims to automate the laborious process of manual counting and segmentation. It is a challenging problem involving touching nuclei, small nuclei, and variations in nuclear size and shape. With the advent of deep learning, convolutional neural networks (CNNs) have shown a powerful ability to extract effective representations from microscopic H&E images. We propose a novel dual-encoder Attention U-Net (DEAU) deep learning architecture with a pseudo hard-attention gating mechanism to enhance attention to target instances. We added a secondary encoder to the attention U-Net to capture the best attention for a given input. Since the hematoxylin (H) channel captures nuclei information, we use the stain-separated H channel as the input to this secondary encoder. The role of the secondary encoder is to transform the attention prior across different spatial resolutions while learning significant attention information. The proposed DEAU was evaluated on three publicly available H&E nuclei segmentation data sets from different research groups. Experimental results show that our approach outperforms other attention-based approaches for nuclei segmentation.
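A minimal sketch of the dual-encoder gating idea is shown below, assuming a PyTorch implementation: features from a primary RGB encoder are modulated by an attention map computed from a secondary encoder that sees only the stain-separated H channel. Names, layer sizes, and the exact gating function are assumptions, not the published DEAU architecture.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class DualEncoderAttentionBlock(nn.Module):
    """Illustrative dual-encoder stage: the main encoder sees the H&E RGB tile,
    the secondary encoder sees only the hematoxylin (H) channel and produces a
    spatial attention map that gates the main features."""
    def __init__(self, channels=32):
        super().__init__()
        self.rgb_enc = conv_block(3, channels)     # primary encoder stage (RGB input)
        self.h_enc = conv_block(1, channels)       # secondary encoder stage (H-channel input)
        self.att = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, rgb, h_channel):
        f = self.rgb_enc(rgb)
        a = self.att(self.h_enc(h_channel))        # attention prior from the nuclei-rich H channel
        return f * a                               # gated features passed on to the decoder

block = DualEncoderAttentionBlock()
rgb = torch.randn(1, 3, 64, 64)
h = torch.randn(1, 1, 64, 64)
print(block(rgb, h).shape)   # torch.Size([1, 32, 64, 64])
```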


Assuntos
Processamento de Imagem Assistida por Computador , Redes Neurais de Computação , Núcleo Celular , Amarelo de Eosina-(YS) , Hematoxilina
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 3475-3478, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891988

ABSTRACT

Automated nuclei segmentation from immunofluorescence (IF) microscopic images is a crucial first step in digital pathology. Much research has been devoted to developing nuclei segmentation algorithms that perform well on good-quality images; however, fewer methods have been developed for poor-quality images such as out-of-focus (blurry) data. In this work, we take a principled approach to studying the performance of nuclei segmentation algorithms on out-of-focus images at different levels of blur. We propose a deep learning encoder-decoder framework with a novel Y-forked decoder, whose two fork ends produce the segmentation and deblurred outputs. The addition of a separate deblurring task in the training paradigm helps to regularize the network on blurry images. Our method accurately predicts instance-level nuclei segmentation on sharp as well as out-of-focus images. Additionally, the predicted deblurred image provides interpretable insights to experts. Experimental analysis on the Human U2OS cells (out-of-focus) dataset shows that our algorithm is robust and outperforms state-of-the-art methods.
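The sketch below illustrates the Y-forked idea under simple assumptions (a shared encoder with two single-convolution forks and an unweighted sum of losses); it is not the paper's network, only a minimal PyTorch example of joint segmentation and deblurring.

```python
import torch
import torch.nn as nn

class YForkedNet(nn.Module):
    """Minimal encoder with a Y-forked decoder: one fork predicts the nuclei
    segmentation mask, the other reconstructs a deblurred image."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.seg_decoder = nn.Conv2d(32, 1, 1)      # fork 1: segmentation logits
        self.deblur_decoder = nn.Conv2d(32, 1, 1)   # fork 2: deblurred image

    def forward(self, x):
        z = self.encoder(x)
        return self.seg_decoder(z), self.deblur_decoder(z)

# joint objective: segmentation loss + deblurring (reconstruction) loss, where the
# deblurring task acts as a regulariser on out-of-focus inputs; equal weighting is an assumption
model = YForkedNet()
blurry = torch.randn(2, 1, 64, 64)
seg_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
sharp_gt = torch.randn(2, 1, 64, 64)
seg_pred, deblur_pred = model(blurry)
loss = nn.BCEWithLogitsLoss()(seg_pred, seg_gt) + nn.L1Loss()(deblur_pred, sharp_gt)
```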


Assuntos
Algoritmos , Núcleo Celular , Imunofluorescência , Humanos , Microscopia de Fluorescência , Coloração e Rotulagem
5.
IEEE Trans Med Imaging; 36(7): 1550-1560, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28287963

ABSTRACT

Nuclear segmentation in digital microscopic tissue images can enable extraction of high-quality features for nuclear morphometrics and other analyses in computational pathology. Conventional image processing techniques, such as Otsu thresholding and watershed segmentation, do not work effectively on challenging cases, such as chromatin-sparse and crowded nuclei. In contrast, machine learning-based segmentation can generalize across various nuclear appearances. However, training machine learning algorithms requires data sets of images in which a vast number of nuclei have been annotated. Publicly accessible and annotated data sets, along with widely agreed-upon metrics to compare techniques, have catalyzed tremendous innovation and progress on other image classification problems, particularly in object recognition. Inspired by their success, we introduce a large publicly accessible data set of hematoxylin and eosin (H&E)-stained tissue images with more than 21,000 painstakingly annotated nuclear boundaries, whose quality was validated by a medical doctor. Because our data set is taken from multiple hospitals and includes a diversity of nuclear appearances from several patients, disease states, and organs, techniques trained on it are likely to generalize well and work right out of the box on other H&E-stained images. We also propose a new metric to evaluate nuclear segmentation results that penalizes object- and pixel-level errors in a unified manner, unlike previous metrics that penalize only one type of error. Finally, we propose a segmentation technique based on deep learning that lays special emphasis on identifying nuclear boundaries, including those between touching or overlapping nuclei, and works well on a diverse set of test images.
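The abstract does not spell out the metric's definition, but an aggregated Jaccard-style score is one way to penalize object- and pixel-level errors in a single number: matched objects contribute their intersections and unions, while missed and spurious objects inflate only the denominator. The sketch below is a hedged illustration of that idea; the paper's exact formulation may differ.

```python
import numpy as np

def aggregated_jaccard(gt, pred):
    """Aggregated Jaccard-style score over labelled instance maps
    (0 = background, 1..N = nucleus instances). Illustrative only."""
    intersection_sum, union_sum = 0, 0
    used_pred = set()
    for g in np.unique(gt[gt > 0]):
        g_mask = gt == g
        candidates = np.unique(pred[g_mask])
        candidates = candidates[candidates > 0]
        best_iou, best_p = 0.0, None
        for p in candidates:                        # best-matching predicted object by IoU
            p_mask = pred == p
            inter = np.logical_and(g_mask, p_mask).sum()
            union = np.logical_or(g_mask, p_mask).sum()
            if inter / union > best_iou:
                best_iou, best_p = inter / union, p
        if best_p is None:                          # missed nucleus counts fully against the score
            union_sum += g_mask.sum()
        else:
            p_mask = pred == best_p
            intersection_sum += np.logical_and(g_mask, p_mask).sum()
            union_sum += np.logical_or(g_mask, p_mask).sum()
            used_pred.add(best_p)
    for p in np.unique(pred[pred > 0]):             # spurious predicted objects also count against
        if p not in used_pred:
            union_sum += (pred == p).sum()
    return intersection_sum / union_sum if union_sum else 1.0
```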


Assuntos
Aprendizado de Máquina , Algoritmos , Núcleo Celular , Humanos , Processamento de Imagem Assistida por Computador , Coloração e Rotulagem
6.
IEEE Trans Med Imaging; 35(8): 1962-71, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27164577

ABSTRACT

Staining and scanning of tissue samples for microscopic examination is fraught with undesirable color variations arising from differences in raw materials and manufacturing techniques of stain vendors, staining protocols of labs, and color responses of digital scanners. When comparing tissue samples, color normalization and stain separation of the tissue images can be helpful for both pathologists and software. Techniques used for natural images fail to utilize the structural properties of stained tissue samples and produce undesirable color distortions. Stained tissue obeys simple physical constraints: stain concentrations cannot be negative, tissue samples are stained with only a few stains, and most tissue regions are characterized by at most one effective stain. We model these physical phenomena that define the tissue structure by first decomposing images in an unsupervised manner into stain density maps that are sparse and non-negative. For a given image, we combine its stain density maps with the stain color basis of a pathologist-preferred target image, thus altering only its color while preserving its structure described by the maps. Stain density correlation with ground truth and preference by pathologists were higher for images normalized using our method than for the alternatives. We also propose a computationally faster extension of this technique for large whole-slide images that selects an appropriate patch sample instead of using the entire image to compute the stain color basis.
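A simplified version of this decomposition can be sketched with a plain non-negative matrix factorization of the optical-density image: the source image contributes its stain density maps and the target image contributes its stain color basis. The sparsity penalty and other refinements of the published method are omitted, so treat this only as an illustration of the recombination step.

```python
import numpy as np
from sklearn.decomposition import NMF

def stain_normalize(source_rgb, target_rgb, n_stains=2):
    """Illustrative structure-preserving color normalization of an HxWx3 uint8 image:
    keep the source's stain densities, swap in the target's stain color basis."""
    def decompose(rgb):
        od = -np.log((rgb.reshape(-1, 3).astype(float) + 1.0) / 256.0)   # RGB -> optical density
        model = NMF(n_components=n_stains, init="nndsvd", max_iter=500)
        densities = model.fit_transform(od)       # per-pixel stain densities (non-negative)
        colors = model.components_                # stain color basis (rows = stains)
        return densities, colors
    src_density, _ = decompose(source_rgb)
    _, tgt_colors = decompose(target_rgb)
    # note: in practice the stain bases of the two images must also be matched/ordered consistently
    od_norm = src_density @ tgt_colors             # source structure, target colors
    rgb_norm = 256.0 * np.exp(-od_norm) - 1.0
    return np.clip(rgb_norm, 0, 255).reshape(source_rgb.shape).astype(np.uint8)
```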


Assuntos
Corantes/química , Cor , Microscopia , Software , Coloração e Rotulagem
7.
J Pathol Inform; 7: 17, 2016.
Article in English | MEDLINE | ID: mdl-27141322

ABSTRACT

CONTEXT: Color normalization techniques for histology have not been empirically tested for their utility in computational pathology pipelines. AIMS: We compared two contemporary techniques for achieving a common intermediate goal - epithelial-stromal classification. SETTINGS AND DESIGN: Expert-annotated regions of epithelium and stroma were treated as ground truth for comparing classifiers on original and color-normalized images. MATERIALS AND METHODS: Epithelial and stromal regions were annotated on thirty diverse-appearing H&E-stained prostate cancer tissue microarray cores. Corresponding sets of thirty images each were generated using the two color normalization techniques. Color metrics were compared for original and color-normalized images. Separate epithelial-stromal classifiers were trained and compared on test images. Main analyses were conducted using a multiresolution segmentation (MRS) approach; comparative analyses using two other classification approaches (convolutional neural network [CNN], Wndchrm) were also performed. STATISTICAL ANALYSIS: For the main MRS method, which relied on classification of super-pixels, the number of variables used was reduced using backward elimination without compromising accuracy, and test area-under-the-curve (AUC) values were compared for original and normalized images. For CNN and Wndchrm, pixel classification test AUCs were compared. RESULTS: The Khan method reduced color saturation, while the Vahadane method reduced hue variance. Super-pixel-level test AUC for MRS was 0.010-0.025 (95% confidence interval limits ± 0.004) higher for the two normalized image sets compared to the original in the 10-80 variable range. Improvement in pixel classification accuracy was also observed for CNN and Wndchrm on color-normalized images. CONCLUSIONS: Color normalization gives a small incremental benefit when a super-pixel-based classification method is used with features that perform implicit color normalization, while the gain is higher for patch-based classification methods when classifying epithelium versus stroma.
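The comparison boils down to training identical classifiers on features from the original and color-normalized image sets and comparing their test AUCs. The toy sketch below illustrates that set-up with placeholder features and a generic classifier; it does not reproduce the MRS, CNN, or Wndchrm pipelines used in the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def epithelium_stroma_auc(train_feats, train_labels, test_feats, test_labels):
    """Train one classifier on features from a given image set (original or
    color-normalized) and report its test AUC for epithelium-vs-stroma classification."""
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(train_feats, train_labels)
    return roc_auc_score(test_labels, clf.predict_proba(test_feats)[:, 1])

# usage: repeat with features extracted from original vs normalized images and compare AUCs
rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)   # toy stand-in features/labels
X_te, y_te = rng.normal(size=(100, 20)), rng.integers(0, 2, 100)
auc_original = epithelium_stroma_auc(X_tr, y_tr, X_te, y_te)
```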
